ELON MUSK IS THE AL CAPONE OF TECHNOLOGY

Al Capone had a huge cadre of fan-boys who, just like Musk’s, worshiped him and refused to see the crimes he committed!

Elon Musk is More Dangerous than AI

August 4, 2014 | Tags: AI, Friendly AI, General Intelligence, IBM Watson, Intelligence, Intelligence Explosion, Opinion, Risks, Siri, Unfriendly AI

I was enjoying a quiet weekend when my news feed started to pop up with stories such as Elon Musk warns us that human-level AI is ‘potentially more dangerous than nukes’ and Elon Musk: AI could be more dangerous than nukes. Wow! Now usually I am a huge fan of Mr. Musk, his approach to innovation, and many of the amazing things he and his teams have accomplished at Tesla and SpaceX for example. But I have to say that I strongly disagree with Mr. Musk about the dangers of AI. First and foremost, AI is not now nor will it ever be “more dangerous than nukes”. Second, and perhaps even more importantly, restricting AI research is in itself dangerous. Let me explain.

What Exactly Did Musk Say?


This flurry of articles was generated by a couple of tweets Musk posted to his Twitter account. I’ve included them below.

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

He’s a bit vague here, unsurprising given that we are talking about just a few tweets, but let’s examine the details.

How Dangerous Are “Nukes” Anyway?

Mr. Musk’s exact statement was that AI is “Potentially more dangerous than nukes.” But just how dangerous are nuclear weapons anyway? Simply stated, nuclear weapons are extremely dangerous and constitute an existential threat to humanity and all life forms on Earth. So to be “more dangerous” than nukes, AI has to be really, really dangerous. Only two nuclear weapons have been used in warfare; together they killed around 250,000 people and destroyed two cities. The testing and manufacture of early nuclear devices was itself dangerous, and entire populations were exposed to radiation, as were many of the scientists working with early devices and materials. The total number of deaths from radiation exposure caused by the testing and manufacture of nuclear devices is unclear, but an estimate of 20,000 people worldwide dying from cancer as a result of nuclear testing would not be unreasonable and might well be low. Current nuclear arsenals, however, number in the thousands of warheads, and today’s devices are far more powerful and destructive than the first ones.

Country          Warheads (active / total)   Date of first test                        CTBT status
The five nuclear-weapon states under the NPT
United States    1,920 / 7,315               16 July 1945 (“Trinity”)                  Signatory
Russia           1,600 / 8,000               29 August 1949 (“RDS-1”)                  Ratifier
United Kingdom   160 / 225                   3 October 1952 (“Hurricane”)              Ratifier
France           290 / 300                   13 February 1960 (“Gerboise Bleue”)       Ratifier
China            n.a. / 250                  16 October 1964 (“596”)                   Signatory
Non-NPT nuclear powers
India            n.a. / 90–110               18 May 1974 (“Smiling Buddha”)            Non-signatory
Pakistan         n.a. / 100–120              28 May 1998 (“Chagai-I”)                  Non-signatory
North Korea      n.a. / <10                  9 October 2006                            Non-signatory
Undeclared nuclear powers
Israel           n.a. / 80 (est. 60–400)     Unknown (suspected 22 September 1979)     Signatory

Even if we only consider active warheads, there are around 3,000 active nuclear devices in the world today. The table above should make it clear that this is a very conservative count, i.e. the U.S. has 1,920 active warheads but 7,315 in total, and Russia has 1,600 active devices. According to one estimate, just 300 of these devices used against the United States would cause 90 million casualties within 30 minutes, and a U.S. strike against Russia would be expected to be similar in scope. Now, we don’t really know exactly what Musk means by “nukes”, as this is pretty imprecise terminology; again, he is just tweeting quickly here, and that is expected. But since he uses the plural, I think he does not mean just one nuclear device. A full-scale nuclear exchange, global thermonuclear war, would be expected to kill at least 200 million people in the first hour. This is a highly conservative estimate, and many more people would die from radiation, starvation and other causes within hours or days. It seems conceivable that something approaching 1 billion people might die within 24 hours, depending on the specifics. The map below gives some idea of the effects of a limited nuclear strike on the U.S., with the resulting fall-out indicated from the darkest zones, considered lethal, to the least dangerous fall-out zones colored yellow (from FEMA-estimated primary counterforce targets for Soviet ICBMs circa 1990).

And this doesn’t include second or subsequent nuclear strikes. Both the U.S. and Russia have the capability and strategy to launch a second strike should such an exchange occur.

Beyond the immediate casualties and deaths from radiation, some scientists believe a full-scale nuclear war could cause a “Nuclear Winter”, which might terminate all life on Earth. While the Nuclear Winter scenario is speculative and we don’t know exactly what would happen, we can clearly see that it would be very bad. At least 1 billion deaths, and possibly the end of all life on Earth, is within the reach of existing and disclosed nuclear arsenals. So when Mr. Musk says AI is more dangerous than nuclear weapons, he is claiming that the technology will result in millions if not billions of human deaths. To summarize, nuclear weapons have probably killed 250,000 people or thereabouts and have the capability to kill pretty much everyone. It’s not exactly clear to me how AI can be more dangerous than killing everyone and all life on Earth, but if we just take my conservative estimate above of 200 million deaths in one hour, then to be more dangerous than nuclear weapons, an AI has to be able to kill more than 3.3 million humans per minute.
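To make the arithmetic behind that last figure explicit, here is a back-of-the-envelope check in Python, using only the estimate stated above of roughly 200 million deaths in the first hour of a full-scale exchange:

```python
# Back-of-the-envelope check of the bar AI would have to clear, assuming the
# article's own figure of ~200 million deaths in the first hour of a
# full-scale nuclear exchange.
deaths_first_hour = 200_000_000
deaths_per_minute = deaths_first_hour / 60
print(f"{deaths_per_minute:,.0f} deaths per minute")  # about 3,333,333, i.e. roughly 3.3 million
```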

Musk is making an extraordinary claim here about how dangerous AI is.

What is Musk Really Worried About?

Again, all we have are these two tweets, so perhaps it is a bit presumptuous to say exactly what Mr. Musk is thinking. However, he does mention reading Nick Bostrom’s recent book Superintelligence, and his statement was made in the context of having just finished it. So we can assume that he is talking about the scenarios presented in Bostrom’s book, and specifically the idea of an intelligence explosion leading to a system whose rapidly increasing intelligence makes it superintelligent shortly after being created, that is, more intelligent than all humans combined.

In science fiction stories such as The Terminator, War Games, The Forbin Project, etc. the rise of artificial intelligence is depicted as dangerous. Bostrom’s book is technical nonfiction, but it essentially falls into this same mold. Various arguments are presented and discounted by Bostrom. But the entire book has a massive flaw. How realistic is this idea of an Intelligence Explosion? It’s not entirely clear.

To see why, consider that I’m the best tic-tac-toe player in the world. I don’t care how smart you are, you can’t beat me at tic-tac-toe. No superintelligence is better at tic-tac-toe than I am, no matter how vastly intelligent it is. With tic-tac-toe, it’s easy to see why: the game has a finite number of possible moves, and once you know the best strategy you can’t lose. But consider a larger tic-tac-toe game like 4×4, 5×5, 6×6, etc. You can also consider 3D tic-tac-toe and even higher-dimensional games.
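That claim about 3×3 tic-tac-toe is easy to check by brute force. Below is a minimal sketch in Python (my own toy code, not anything from Bostrom or Musk) of an exhaustive minimax search showing that the value of the game under perfect play is a draw, so an optimal player can never be beaten, however intelligent the opponent:

```python
# Exhaustive minimax over 3x3 tic-tac-toe: with perfect play by both sides,
# the game value is 0 (a draw), so knowing the optimal strategy is a ceiling
# that no amount of extra intelligence can beat.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with perfect play: +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # full board, no winner: draw
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, cell in enumerate(board) if cell == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # prints 0: neither side can force a win against best play
```

The same brute-force argument applies in principle to any finite game; what changes as the board grows is only that the search becomes astronomically expensive, which is exactly where the question of how much an extra increment of intelligence actually buys you comes in.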

A superintelligence might be able to beat me at larger games, say a 100×100 tic-tac-toe game, or 3D tic-tac-toe. However, the game is still finite, and so it is unclear that another intelligence, much smarter than the first, can play any better. It depends on how much each player knows about the game, as well as the specific properties of the game itself. So, depending on the environment in which we are acting, an unbounded intelligence explosion might not happen. An AI might get smart enough that being smarter isn’t an immediate advantage, and the cost of increasing intelligence could outweigh any benefit. Certainly humans tend to overestimate the advantages conferred by their own intelligence.

Intelligence vs. Autonomy

More importantly, autonomy is more dangerous than intelligence. Dangerousness requires the ability to do harm, and therefore also the ability to act freely (autonomy). It does not, however, require intelligence. A simple feedback loop could kill everyone on Earth while being dumber than the simplest flatworm. It is easy to see that intelligence is not the same as dangerousness and isn’t even always correlated with it. Consider who you would rather fight to the death:

a. an unarmed man with 170 IQ and both his arms tied behind his back or

b. a huge angry man with 70 IQ holding a blunt instrument.

You don’t have to be smart to be dangerous.

This is a pretty important point, because in all of the recent discussion of the potential dangers of AI, the conversation is focused on “intelligence” and “superintelligence”. But intelligence alone is not dangerous. For example, a superintelligent system that is not autonomous but can only act with my permission is not dangerous to me. Further, in order to do harm, the system has to be able to act in the world or cause someone or something else to act; an appropriately isolated superintelligence can’t hurt anyone either. So the fear is really not about the rise of a super-humanly intelligent system alone, but about the rise of one that is also autonomous and free to act in the world.

Sure, I agree, such a system could be highly dangerous and we’ve all seen the movies and read the books where this is what happens. However the focus on intelligence is misleading and in itself dangerous.

Elon Musk is more dangerous than AI because he is autonomous and free to act in the world.

Reality check: a machine that hunts and kills humans in large numbers wouldn’t need to be more intelligent than an insect. And yes, it would be extremely dangerous to make such machines, especially if we add in the idea of self-replication or self-manufacture. Imagine an insect-like killing machine that can build copies of itself from raw materials or repair itself from the spare parts of its fallen comrades. But notice that its relative intelligence, or lack thereof, has little to do with the danger of such a system. It is dangerous because it has the ability to kill you and is designed to do so. The fact that this machine can’t play chess, converse in English, or pass a Turing Test doesn’t change anything about its ability to kill.

The danger then is not that we will create an intelligence greater than our own, but that we will embody these intelligences into autonomous systems that can kill us either on purpose or by accident.
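To make that distinction concrete, here is a minimal sketch in Python of the same (entirely hypothetical) planner wired up two ways: gated behind human approval, and fully autonomous. The names plan_action, human_approves, and actuator are illustrative placeholders of my own, not any real system; the point is that the danger lives in what the actuator can do and whether the approval gate exists, not in how clever the planner is.

```python
# Hypothetical sketch: "intelligence" lives in plan_action; "dangerousness"
# lives in what the actuator can physically do and whether a human gate exists.

def plan_action(observation):
    """Stand-in for arbitrarily intelligent (or arbitrarily dumb) planning."""
    return "proposed-action"

def supervised_step(observation, human_approves, actuator):
    """Non-autonomous: the system can only propose; a human must approve each act."""
    action = plan_action(observation)
    if human_approves(action):
        actuator(action)

def autonomous_step(observation, actuator):
    """Autonomous: acts directly in the world, however smart or dumb the planner is."""
    actuator(plan_action(observation))
```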

Preventing the Rise of Dangerous AI

Let’s accept for a moment Musk’s assertion that AI is potentially more dangerous than nuclear weapons. What should we do about this?

The best thing to do would be to keep the knowledge of how to build an AI secret and to hide even the possible existence of such a machine from the world. But it’s already too late.

For a while that’s what we did with nuclear weapons. But then we detonated two of them, and the cat was out of the bag.

Since then, the U.S. and the world have created a vast security and surveillance apparatus largely devoted to managing and controlling nuclear weapons and materials. The operation of this apparatus is formalized through a series of complex agreements and arms control treaties; it also includes the operation of vast technical systems and involves a large number of people across multiple nations and organizations. Few people have any idea how vast and far-reaching this apparatus is. The Snowden revelations will give you some idea, and even that isn’t the whole story.

And despite this almost omnipresent global security apparatus with vast financial resources, nations such as Pakistan, North Korea, Libya, and others were able to gain access to the technology to make nuclear weapons and various subsystems, and in some cases they have demonstrated working weapons. This happened despite the best efforts of the global security apparatus to prevent exactly this outcome. We can’t secure nuclear weapons perfectly, so the idea that we can secure AI perfectly is at least in question. What would be required?

With nuclear weapons, both the materials and the designs are illegal to possess. Manufacture of a weapon requires the raw materials, general scientific knowledge, and specific design details and engineering knowledge. The details of working weapons are all highly classified, and even just possessing them without permission will land you in prison. Imagine a similar security regime applied to AI. First, we would have to restrict the materials used to make AIs: computers and software tools like programming languages and compilers. Only individuals working on classified projects with appropriate security clearances would be given access to them. Further, illegal possession of programmable computers or development tools would be a serious felony carrying high criminal penalties. Surveillance and law enforcement would be involved and would act with extreme prejudice against anyone suspected of having these items or of developing AI.

But creating an AI is something you can do at home on your personal computer. Even when you need larger computational resources, they are now available on demand in the cloud or can be built fairly inexpensively from commercial components such as graphics accelerator cards. Restricting AI would mean restricting access to these tools and systems as well. Beyond this, the specific engineering knowledge associated with making dangerous AIs would have to be protected.
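To illustrate just how low that barrier already is, here is a toy sketch of the kind of thing anyone can do today on a home machine with nothing but Python and the freely available NumPy library: a tiny two-layer neural network learning the XOR function. This is my own illustrative example, not code from Vicarious or anyone else mentioned here.

```python
# "AI at home": a tiny neural network learning XOR with plain NumPy.
# No special hardware, licenses, or clearances required.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predictions
    grad_out = p - y                    # cross-entropy gradient at the output
    grad_hid = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid;  b1 -= lr * grad_hid.sum(axis=0)

print(np.round(p.ravel(), 2))  # should end up close to [0, 1, 1, 0]
```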

It would become illegal to implement, or possibly even to know about, certain algorithms, areas of mathematics, etc. This idea isn’t unprecedented; consider, for example, the efforts of the U.S. government to restrict knowledge of cryptography algorithms in the 1990s. Certain programs, such as those associated with classified weapon systems, are themselves classified, and unauthorized possession of these codes is illegal. If we follow Musk’s argument, simply having access to AI software would become a crime. But would it be enough?

Some attention has been given to the notion that intelligence is an “emergent” phenomenon of the human neural system. This suggests that machine intelligence could also be an emergent phenomenon of an underlying system or network. Could a dangerous AI emerge unexpectedly from a safe system? Certainly if we build and field systems whose operation we don’t fully understand, unexpected events can transpire. Existing deep learning systems are examples of systems that work, but do so by a mechanism which humans don’t and possibly can’t really understand.

Some other researchers have focused on the creation of provably beneficial or provably friendly AIs. The notion here is to build AIs according to some rules, or within certain constraints, that ensure the resulting AI is friendly. But the idea is flawed in a very deep way. First, defining “friendly” is a huge problem. Even seemingly friendly systems can become unfriendly if the context changes or when taken to extremes; Bostrom covers some of these scenarios in his book. But this is also true of simple maximizing systems that have poorly specified goals. It isn’t about AI per se.

Further, a friendly program can be modified or subverted to become unfriendly. Alternatively, a seemingly friendly program might contain hidden functionality that is unfriendly. There is in general no way to prove that an arbitrary program presented to you is friendly and secure. In fact Rice’s Theorem, a not especially well-known result in computer science, states that no algorithm can decide a non-trivial behavioral property like this for arbitrary programs. So we can’t tell whether a given piece of software contains a dangerous AI simply by looking at it.
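For readers unfamiliar with the result, here is a rough sketch of the standard argument, written in Python-flavored form. Everything here is hypothetical scaffolding of my own (there is, of course, no real is_friendly, simulate, or misbehave function); the point is only that if a general friendliness decider existed, it could be used to solve the halting problem, which is provably impossible.

```python
# Hypothetical sketch only: assume, for contradiction, a perfect decider
# is_friendly(program) -> bool. All helpers are passed in as parameters
# precisely because none of them can exist in general.

def build_probe(simulate, misbehave, program_src, input_data):
    """Build a program that stays harmless forever unless `program_src`
    halts on `input_data`, in which case it does something unfriendly."""
    def probe():
        simulate(program_src, input_data)   # may run forever
        misbehave()                         # reached only if the simulation halts
    return probe

def halts(is_friendly, simulate, misbehave, program_src, input_data):
    # The probe is unfriendly exactly when program_src halts on input_data,
    # so a working is_friendly would decide the (undecidable) halting problem.
    return not is_friendly(build_probe(simulate, misbehave, program_src, input_data))
```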

I don’t understand how one can assert that AI is so dangerous that it might destroy the human race while at the same time investing in it. But that is exactly what Mr. Musk is doing with his investment in the AI company Vicarious. Perhaps he will close Vicarious, or use it to promote the security apparatus described above. But perhaps he just thinks the research they are doing at present is “safe”. The problem is that we don’t know what they are doing.

Moreover, a bad outcome might look nothing like the Terminator scenario. For example, we might cede control over our lives to such systems by giving them control over food production and, with it, our ability to survive. Consider a plausible future in which food production is entirely robotic and food is produced in technological vertical farms or similar systems. We might lose the ability to produce food ourselves, or forget how to repair or maintain the systems, and so on. The point here is that not all bad outcomes are obvious; they might start off looking like good ideas.

The security regime that would be required to secure AIs would be similar to that for nuclear weapons, but it would have vast negative consequences for our society and intellectual lives. Knowledge of computers and programming would be tightly controlled, rendering our economy less innovative and productive. All employees of AI companies would require high level security clearances. A lot of smart people would simply refuse to go through the hassle and our competitiveness would suffer severely.

Notably, those who fear an AI doomsday aren’t the only people who want to limit your access to free, general-purpose computation. The recording and media industries would also love to end citizens’ ability to run arbitrary programs on unlocked machines. The President of the United States just signed a law allowing Americans to unlock their phones. What is being proposed here is a vast leap backwards; you wouldn’t even be allowed to own a phone that could be unlocked. Cory Doctorow’s essays Lockdown: The War on General Purpose Computing and The Coming Civil War over General-Purpose Computing cover this subject nicely.

Consider what happens if, in Cory’s essays, we replace “copying” with “creating dangerous AI”, e.g.: “In short, they made unrealistic demands on reality and reality did not oblige them. Copying only got easier following the passage of these laws—copying will only ever get easier. Right now is as hard as copying will get.”

Right now is as hard as creating dangerous AIs will get.

But trying to stop people from creating them will end our ability to have a free and open society.

Of course Mr. Musk never says any of this, and I have no idea whether he has even considered this aspect of the issue. However, if AI really is, as he asserts, more dangerous than nuclear weapons, you can see the immediate need for an appropriately serious security regime. This is implied by his assertion that AI is “potentially more dangerous than nukes”, and it is a highly dangerous idea in itself, in my view.

Imagine a world in which it is illegal for citizens to own programmable machines or to own tools for programming. People would be surrounded by complex and intelligent systems, but they would have no idea how they worked and no way of accessing this knowledge. They would quite literally be prisoners of the matrix.

I think we need to move in the opposite direction, empowering more people to code and to understand code and how it works.

Ignoring the Real Benefits from AI

This is a case where the proposed cure is far worse than the disease. As I have argued above, intelligence isn’t dangerous by itself, but restricting the development of intelligence might be.

Beyond suggesting the need to control general-purpose computation, perhaps the most dangerous aspect of Musk’s tweets is that he ignores entirely the possible benefits of AI. Again, it’s easy to read too much into these tweets. I assume that a belief in AI’s potential for good was part of the reason for his interest in Vicarious, and that Musk imagined their technology could be used to create things that make people’s lives better.

He seems to have forgotten this part. See, for example, The Promise of a Cancer Drug Developed by AI.

We are just now starting to see some really interesting results from AI systems like Vicarious’ and IBM’s Watson. These systems have amazing potential applications in areas such as medicine and health care. They will help us live longer and be healthier. Banning or restricting AI research would limit research into beneficial uses of AI in areas such as medicine, drug design, or the development of longevity therapies. Imagine if we restricted humans from getting smarter too; it obviously makes no sense. If we succumb to this sort of fear-mongering, and Musk is not alone in propagating it, we will lose all sorts of advantageous developments that rely on AI for their operation.

This is a clear case where the proactionary principle applies, especially if you agree that the existential risks of AI are being overstated here. AI might also help us end poverty, prevent wars, and more. To make a rational decision about a technology, you need to consider both its possible benefits and its risks, not only the risks. The same goes for other technologies, such as SpaceX’s rockets, which are also sometimes known as “ballistic missiles”. So please, Mr. Musk, let’s also talk about the potential vast benefits of AI for humanity, and not just frighten people with movie doomsday fantasy scenarios. Hyperbole and exaggerated risks do not move the debate forward. Please don’t support the rise of an even more pervasive and oppressive global security regime to control AI research and computing more generally. That idea is even more dangerous than AI.